Language-Inspired Approaches to Phoneme Classification
Linguistics-inspired strategies for leveraging phonetic features in LibriBrain MEG data, covering feature taxonomies, diphthong handling, and feature-to-phoneme conversion pipelines.
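To make the idea of a feature-to-phoneme conversion pipeline concrete, here is a minimal sketch in Python. It assumes per-feature probabilities from some upstream classifier and maps them to the nearest phoneme in a toy feature table; the feature names, phoneme set, and table values are illustrative assumptions, not the taxonomy used in the post or by the LibriBrain baselines.

```python
# Minimal sketch of one possible feature-to-phoneme conversion step:
# binarise per-feature predictions (e.g. from per-feature classifiers on
# MEG windows) and pick the phoneme whose feature vector is closest in
# Hamming distance. The tiny feature table below is illustrative only.

from typing import Dict, Tuple

FEATURES = ("voiced", "nasal", "plosive", "fricative", "vowel")

# Illustrative feature table for a handful of ARPABET-style phonemes.
PHONEME_FEATURES: Dict[str, Tuple[int, ...]] = {
    "P":  (0, 0, 1, 0, 0),
    "B":  (1, 0, 1, 0, 0),
    "M":  (1, 1, 0, 0, 0),
    "S":  (0, 0, 0, 1, 0),
    "Z":  (1, 0, 0, 1, 0),
    "AA": (1, 0, 0, 0, 1),
}

def features_to_phoneme(predicted: Dict[str, float], threshold: float = 0.5) -> str:
    """Binarise feature probabilities and return the nearest phoneme."""
    binary = tuple(int(predicted.get(f, 0.0) >= threshold) for f in FEATURES)
    return min(
        PHONEME_FEATURES,
        key=lambda ph: sum(a != b for a, b in zip(binary, PHONEME_FEATURES[ph])),
    )

# Example: strong evidence for voicing and frication -> "Z" under this toy table.
print(features_to_phoneme({"voiced": 0.9, "fricative": 0.8}))
```

Nearest-neighbour decoding over a feature table is only one way to close the loop from predicted features back to phonemes; the point of the sketch is the shape of the pipeline, not a specific recipe.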
Explore our latest research and discoveries in neural processing and brain-computer interfaces
Neuroscience-informed approaches to speech detection for the LibriBrain competition, including STG sensor analysis, spatial and temporal strategies, and architectural recommendations.
Exploring the Speech Detection task and the reference model architecture used in the LibriBrain competition, including insights into our 'Start Simple' approach and future research directions.
Exploring the motivation behind the 2025 PNPL Competition and how we're building towards non-invasive speech brain-computer interfaces through deep MEG datasets and collaborative research.
Take a look at some of our recent work in neural signal processing and brain-computer interfaces.
tl;dr: Competition framework for advancing speech decoding from non-invasive brain data using the LibriBrain dataset. Cite this for the LibriBrain competition.
tl;dr: The largest single-subject MEG dataset to date for speech decoding, with over 50 hours of recordings. Cite this for the LibriBrain dataset.
tl;dr: Advances in non-invasive brain-to-text technology with LLM-based rescoring and predictive in-filling approaches.
tl;dr: Breakthrough in scaling speech decoding models across subjects using self-supervised learning techniques.
A multidisciplinary team of researchers, engineers, and students advancing the frontiers of neural processing and brain-computer interfaces.
Potential doctoral candidates are encouraged to apply both to the Department of Engineering Science and to the AIMS programme.
Potential collaborators are encouraged to reach out directly to the PI, Oiwi Parker Jones.